  Paid full text   84398 articles
  Free   1031 articles
  Free (domestic)   406 articles
Electrical engineering   782 articles
General   2322 articles
Chemical industry   11573 articles
Metalworking   4771 articles
Machinery and instruments   3036 articles
Building science   2197 articles
Mining engineering   562 articles
Energy and power   1155 articles
Light industry   3674 articles
Hydraulic engineering   1275 articles
Petroleum and natural gas   357 articles
Radio and electronics   9318 articles
General industrial technology   16421 articles
Metallurgical industry   2689 articles
Atomic energy technology   258 articles
Automation technology   25445 articles
  2023   28 articles
  2022   44 articles
  2021   65 articles
  2020   53 articles
  2019   47 articles
  2018   14483 articles
  2017   13410 articles
  2016   9995 articles
  2015   629 articles
  2014   262 articles
  2013   269 articles
  2012   3167 articles
  2011   9456 articles
  2010   8298 articles
  2009   5567 articles
  2008   6793 articles
  2007   7792 articles
  2006   134 articles
  2005   1233 articles
  2004   1147 articles
  2003   1182 articles
  2002   537 articles
  2001   101 articles
  2000   182 articles
  1999   63 articles
  1998   72 articles
  1997   39 articles
  1996   56 articles
  1995   18 articles
  1994   20 articles
  1993   15 articles
  1992   19 articles
  1991   26 articles
  1988   14 articles
  1969   25 articles
  1968   43 articles
  1967   33 articles
  1966   42 articles
  1965   44 articles
  1964   11 articles
  1963   28 articles
  1962   22 articles
  1961   18 articles
  1960   30 articles
  1959   35 articles
  1958   37 articles
  1957   36 articles
  1956   34 articles
  1955   63 articles
  1954   68 articles
Sort order:   A total of 10,000 query results; search took 265 ms
991.
The classical operational law of uncertain variables proposed by Liu makes an important contribution to the development of uncertainty theory in both theory and applications. It provides a powerful and practical approach for calculating the uncertainty distribution of a strictly monotone function of uncertain variables. However, the restriction to strictly monotone functions limits its applications, since many practical problems can be modeled only by general monotone functions rather than strictly monotone ones. Therefore, an extension of the original operational law is needed. For this purpose, this paper first presents some properties of the uncertainty distributions of monotone functions of uncertain variables as well as the generalized inverse uncertainty distributions. On the basis of these discussions, a generalized operational law is proposed as a natural extension of the original operational law. The uncertainty distribution of a general monotone function of independent regular uncertain variables can then be derived, in a way analogous to that suggested by the original operational law for strictly monotone functions. Furthermore, as an application of the generalized operational law, a theorem for calculating the expected values of general monotone functions of uncertain variables is presented as well.
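A minimal numerical sketch of the original (strictly monotone) operational law, which is the special case the paper generalizes, is given below. It assumes linear uncertain variables L(a, b), whose inverse uncertainty distribution is Phi^{-1}(alpha) = a + (b - a) * alpha; the function f and the parameter values are illustrative only.

```python
# Sketch of Liu's original operational law for a strictly monotone function
# (the special case that the paper generalizes to general monotone functions).

def linear_inverse(a, b):
    """Inverse uncertainty distribution of a linear uncertain variable L(a, b)."""
    return lambda alpha: a + (b - a) * alpha

def inverse_of_f(inv1, inv2):
    """Inverse distribution of f(x1, x2) = x1 - x2, which is strictly increasing
    in x1 and strictly decreasing in x2:
        Psi^{-1}(alpha) = f(Phi1^{-1}(alpha), Phi2^{-1}(1 - alpha))."""
    return lambda alpha: inv1(alpha) - inv2(1.0 - alpha)

if __name__ == "__main__":
    inv1 = linear_inverse(1.0, 3.0)   # x1 ~ L(1, 3)
    inv2 = linear_inverse(0.0, 2.0)   # x2 ~ L(0, 2)
    psi_inv = inverse_of_f(inv1, inv2)
    for alpha in (0.1, 0.5, 0.9):
        print(f"alpha={alpha:.1f}  Psi^-1(alpha)={psi_inv(alpha):.3f}")
```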
992.
Differential evolution (DE) is a well-known optimization approach for nonlinear and complex optimization problems. However, many real-world optimization problems are constrained problems involving equality and inequality constraints. DE with constraint handling techniques, named constrained differential evolution (CDE), can be used to solve constrained optimization problems. In this paper, we propose a new CDE framework that uses generalized opposition-based learning (GOBL), named GOBL-CDE. In GOBL-CDE, firstly, a transformed population is generated using generalized opposition-based learning during population initialization. Secondly, the transformed population and the initial population are merged, and only the best half of the individuals are selected to compose the new initial population, which then proceeds to mutation, crossover, and selection. Lastly, based on a jumping probability, the transformed population is computed again after each new population is generated, and the fittest individuals from the union of the current population and the transformed population are selected to compose the new population. The GOBL-CDE framework can be applied to most CDE variants. As examples, in this study the framework is applied to two popular representative CDE variants, i.e., rank-iMDDE and \(\varepsilon \)DEag. Experimental results on 24 benchmark functions from CEC'2006 and 18 benchmark functions from CEC'2010 show that the proposed framework is an effective approach to enhance the performance of CDE algorithms.
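As a rough illustration of the GOBL step, the sketch below generates a transformed population with the generalized opposition formula and keeps the best half of the merged population, here on an unconstrained sphere function with illustrative parameter values; the full GOBL-CDE framework applies this inside constrained DE variants such as rank-iMDDE and \(\varepsilon \)DEag, which is not reproduced here.

```python
import numpy as np

def generalized_opposition(pop, rng):
    """GOBL transform x* = k * (a + b) - x, where [a, b] are the current
    population's per-dimension bounds and k ~ U(0, 1)."""
    a, b = pop.min(axis=0), pop.max(axis=0)
    k = rng.random()
    return np.clip(k * (a + b) - pop, a, b)   # clip back into the dynamic bounds

def sphere(x):
    return float(np.sum(x ** 2))

rng = np.random.default_rng(0)
dim, size = 10, 20
pop = rng.uniform(-5.0, 5.0, (size, dim))

# GOBL-style initialization: merge the original and transformed populations and
# keep only the best half; the same transform is reapplied during the run with
# a jumping probability (not shown in this sketch).
merged = np.vstack([pop, generalized_opposition(pop, rng)])
fitness = np.array([sphere(x) for x in merged])
pop = merged[np.argsort(fitness)[:size]]
print("best fitness after GOBL initialization:", sphere(pop[0]))
```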
993.
MOEA/D is a promising evolutionary algorithm for multi- and many-objective optimization. To improve the search performance of MOEA/D, this work focuses on the solution update method of the conventional MOEA/D and proposes an alternative, the chain-reaction solution update. The proposed method is designed to maintain and improve the variable (genetic) diversity of the population by avoiding duplicate solutions in the population. In addition, the proposed method determines the order in which existing solutions are updated depending on the location of each offspring in the objective space. Furthermore, when an existing solution in the population is replaced by a new offspring, the proposed method tries to reuse the existing solution for other search directions by recursively performing the chain-reaction update procedure. This work uses discrete knapsack and continuous WFG4 problems with 2–8 objectives. Experimental results on the knapsack problems show that the proposed chain-reaction update improves the search performance of MOEA/D by enhancing the diversity of solutions in the objective space. In addition, experimental results on the WFG4 problems show that the search performance of MOEA/D can be further improved by the proposed method.
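The sketch below illustrates the chain-reaction idea in a heavily simplified form: an offspring that displaces the incumbent of a Tchebycheff subproblem triggers a recursive attempt to reuse the displaced solution on another, not-yet-visited subproblem, and exact duplicates are rejected. The way the next subproblem is chosen and the toy data are assumptions made for illustration, not the paper's exact procedure.

```python
import numpy as np

def tchebycheff(f, w, z):
    """Tchebycheff scalarizing function for one subproblem."""
    return np.max(w * np.abs(np.asarray(f) - np.asarray(z)))

def chain_reaction_update(i, cand_x, cand_f, X, F, W, z, visited):
    """Try to place (cand_x, cand_f) into subproblem i; if it displaces the
    incumbent, recursively offer the displaced solution to another subproblem."""
    if i in visited:
        return
    visited.add(i)
    # reject exact duplicates to preserve variable-space diversity
    if any(np.array_equal(cand_x, x) for x in X):
        return
    if tchebycheff(cand_f, W[i], z) < tchebycheff(F[i], W[i], z):
        old_x, old_f = X[i].copy(), F[i].copy()
        X[i], F[i] = cand_x, cand_f
        # offer the displaced solution to the unvisited subproblem whose weight
        # suits it best (the chain reaction)
        unvisited = [j for j in range(len(W)) if j not in visited]
        if unvisited:
            j_best = min(unvisited, key=lambda j: tchebycheff(old_f, W[j], z))
            chain_reaction_update(j_best, old_x, old_f, X, F, W, z, visited)

# toy usage: 4 subproblems, 2 objectives, assumed objective f_k(x) = x_k^2
W = np.array([[1.0, 0.0], [0.7, 0.3], [0.3, 0.7], [0.0, 1.0]])
X = [np.array(v) for v in ([0.1, 0.9], [0.4, 0.6], [0.6, 0.4], [0.9, 0.1])]
F = [x ** 2 for x in X]
z = np.zeros(2)                               # ideal point
offspring = np.array([0.2, 0.3])
chain_reaction_update(1, offspring, offspring ** 2, X, F, W, z, set())
```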
994.
Web services, which can be described as functionality modules invoked over a network as part of a larger application, are often used in software development. Instead of occasionally incorporating some of these services into an application, they can be thought of as fundamental building blocks that are combined in a process known as Web service composition. Manually creating compositions from a large number of candidate services is very time consuming, and developing techniques for achieving this objective in an automated manner has become an active research field. One promising group of techniques is evolutionary computing, which can effectively tackle the large search spaces characteristic of the composition problem. Therefore, this paper proposes the use of genetic programming for Web service composition, investigating three variations to ensure the creation of functionally correct solutions that are also optimised according to their quality of service. A variety of comparisons are carried out between these variations and two particle swarm optimisation approaches, with results showing that there is likely a trade-off between execution time and solution quality when employing genetic programming and particle swarm optimisation. Even though genetic programming has a higher execution time for most datasets, the results indicate that it scales better than particle swarm optimisation.
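The toy sketch below shows one common way quality of service can be aggregated over a composition tree (sequence and parallel constructs) to yield a fitness value that genetic programming could optimise; the attribute set, aggregation rules, and fitness weights are generic conventions assumed for illustration, not necessarily those used in the paper.

```python
import math

# Toy QoS aggregation: sequences add response time and multiply reliability;
# parallel branches take the max response time and multiply reliability.

class Service:
    def __init__(self, name, time, reliability):
        self.name, self.time, self.reliability = name, time, reliability
    def qos(self):
        return self.time, self.reliability

class Sequence:
    def __init__(self, *children):
        self.children = children
    def qos(self):
        times, rels = zip(*(c.qos() for c in self.children))
        return sum(times), math.prod(rels)

class Parallel:
    def __init__(self, *children):
        self.children = children
    def qos(self):
        times, rels = zip(*(c.qos() for c in self.children))
        return max(times), math.prod(rels)

def fitness(tree, w_time=0.5, w_rel=0.5, max_time=10.0):
    """Higher is better: weighted reliability minus normalized response time."""
    time, rel = tree.qos()
    return w_rel * rel - w_time * (time / max_time)

composition = Sequence(Service("auth", 0.4, 0.99),
                       Parallel(Service("flight", 1.2, 0.95),
                                Service("hotel", 0.9, 0.97)),
                       Service("pay", 0.6, 0.98))
print("QoS:", composition.qos(), "fitness:", round(fitness(composition), 4))
```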
995.
Scalability is a major and urgent problem in the evolvable hardware (EHW) field. For the design of large circuits, an EHW method with a decomposition strategy is able to find a solution, but at the cost of high complexity and long evolution time. This study aims to optimize the decomposition of large-scale circuits so that the EHW method becomes scalable and more efficient. This paper proposes a projection-based decomposition (PD), combined with Cartesian genetic programming (CGP) into an EHW system named PD-CGP, to design relatively large circuits. PD gradually decomposes a Boolean function by adaptively projecting it onto the properties of its variables, which minimizes the complexity and number of sub-logic blocks. CGP employs an evolutionary strategy to search for simple and compact solutions for these sub-blocks. Benchmark circuits from the MCNC library, \(n\)-parity circuits, and arithmetic circuits are used in the experiments to demonstrate the scalability and efficiency of PD-CGP. The results show that PD-CGP is superior to 3SD-ES in evolving large circuits in terms of complexity reduction. PD-CGP also outperforms GDD+GA in evolving relatively large arithmetic circuits. Additionally, PD-CGP successfully evolves larger \(n\)-even-parity and arithmetic circuits, which has not been achieved by other approaches.
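The sketch below illustrates projection of a Boolean function onto a single variable in the Shannon-cofactor sense, splitting a truth table into two smaller sub-blocks; the paper's PD additionally chooses the projection variable adaptively to minimise the number and complexity of sub-blocks, which this sketch does not attempt.

```python
from itertools import product

def cofactors(truth_table, n_vars, var):
    """Project a Boolean function onto variable `var`: return the two
    sub-functions obtained by fixing var = 0 and var = 1.
    `truth_table` maps input bit tuples to 0/1."""
    f0, f1 = {}, {}
    for bits in product((0, 1), repeat=n_vars):
        reduced = bits[:var] + bits[var + 1:]
        (f1 if bits[var] else f0)[reduced] = truth_table[bits]
    return f0, f1

# 3-input majority function as an example target circuit
maj = {bits: int(sum(bits) >= 2) for bits in product((0, 1), repeat=3)}
f0, f1 = cofactors(maj, 3, 0)        # project onto x0
print("x0=0 cofactor:", f0)          # AND-like sub-block over (x1, x2)
print("x0=1 cofactor:", f1)          # OR-like sub-block over (x1, x2)
# f = (not x0 and f0(x1, x2)) or (x0 and f1(x1, x2)); each cofactor is a
# smaller sub-block that CGP can evolve separately.
```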
996.
Minimal attribute reduction plays an important role in rough set theory. Heuristic algorithms have been proposed in the literature to obtain a minimal reduction, yet an unresolved issue is that many redundant non-empty elements, including duplicates and supersets, exist in the discernibility matrix. To eliminate this redundancy and these pointless elements, we propose in this paper a compactness discernibility information tree (CDI-tree). The CDI-tree maps each non-empty element to a path and allows numerous non-empty elements to share the same prefix; it is thus a compact structure for storing the non-empty elements of a discernibility matrix. A complete algorithm is presented to compute a Pawlak reduction based on the CDI-tree. The experimental results reveal that the proposed algorithm is more efficient than the benchmark algorithms at finding a minimal attribute reduction.
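A minimal prefix-tree sketch of the idea follows: each non-empty discernibility element is inserted as a path of sorted attribute indices, so duplicates collapse onto a single path and elements with a common prefix share nodes. Superset pruning, which the CDI-tree also performs, is omitted here, and the class and method names are illustrative only.

```python
class CDINode:
    """One node of a compact discernibility-information prefix tree."""
    def __init__(self, attr=None):
        self.attr = attr
        self.children = {}
        self.end = False   # marks the end of a stored discernibility element

class CDITree:
    def __init__(self):
        self.root = CDINode()

    def insert(self, element):
        """Insert one non-empty discernibility element (a set of attribute
        indices). Attributes are sorted so duplicates map to the same path and
        elements with a common prefix share nodes."""
        node = self.root
        for attr in sorted(element):
            node = node.children.setdefault(attr, CDINode(attr))
        node.end = True

tree = CDITree()
for elem in [{1, 3}, {3, 1}, {1, 3, 5}, {2, 4}]:   # {3, 1} duplicates {1, 3}
    tree.insert(elem)
print("children of root:", sorted(tree.root.children))   # [1, 2]
```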
997.
In this study, a novel online support vector regressor (SVR) controller based on a system model estimated by a separate online SVR is proposed. The main idea is to obtain an SVR controller based on an estimated model of the system by optimizing the margin between the reference input and the system output. For this purpose, a “closed-loop margin”, which depends on the tracking error, is defined; the parameters of the SVR controller are then optimized so as to optimize the closed-loop margin and minimize the tracking error. In order to construct the closed-loop margin, the model of the system estimated by an online SVR is utilized, and the parameters of the SVR controller are adjusted via this SVR model of the system. The stability of the closed-loop system has also been analyzed. The performance of the proposed method has been evaluated by simulations carried out on a continuously stirred tank reactor (CSTR) and a bioreactor, and the results show that the SVR model and SVR controller attain good modeling and control performance.
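The sketch below is a heavily simplified stand-in: it identifies a toy plant with a scikit-learn SVR refit on a sliding window (standing in for a truly online SVR) and picks the control input by a one-step-ahead search that minimises the predicted tracking error. The paper instead trains a second SVR as the controller by optimising the closed-loop margin, which this sketch does not reproduce; the plant model and all parameter values are assumptions.

```python
import numpy as np
from sklearn.svm import SVR

def plant(y, u):
    """Toy nonlinear plant (an assumption, not the paper's CSTR/bioreactor)."""
    return 0.8 * y + 0.3 * np.tanh(u)

rng = np.random.default_rng(1)
history_X, history_y = [], []
y, reference = 0.0, 1.0
model = SVR(kernel="rbf", C=10.0, epsilon=0.01)

for t in range(60):
    if len(history_y) > 10:
        # controller stand-in: one-step-ahead search over candidate inputs
        candidates = np.linspace(-2, 2, 41)
        preds = model.predict(np.column_stack([np.full_like(candidates, y), candidates]))
        u = candidates[np.argmin(np.abs(reference - preds))]
    else:
        u = rng.uniform(-2, 2)            # excitation while the model is still poor
    y_next = plant(y, u)
    history_X.append([y, u])
    history_y.append(y_next)
    # sliding-window refit stands in for a truly online SVR update
    model.fit(np.array(history_X[-50:]), np.array(history_y[-50:]))
    y = y_next

print("final tracking error:", abs(reference - y))
```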
998.
Because most runoff time series with a limited amount of data exhibit inherently nonlinear and stochastic characteristics and tend to show chaotic behavior, strategies based on chaotic analysis are popular for analyzing them as real nonlinear dynamical systems. No single prediction method can achieve perfect performance for yearly rainfall-runoff forecasting. Thus, a mixture strategy denoted WT-PSR-GA-NN, which is composed of wavelet transform (WT), phase space reconstruction (PSR), neural network (NN) and genetic algorithm (GA), is presented in this paper. In the WT-PSR-GA-NN framework, the time series gathered from Liujiang River runoff data is processed as follows: (1) the runoff time series is first decomposed into low-frequency and high-frequency sub-series by wavelet transformation; (2) the two sub-series are separately and independently reconstructed into phase spaces; (3) the transformed time series in the reconstructed phase spaces are modeled by neural networks trained with a genetic algorithm to avoid trapping in local minima; (4) the predicted results of the low-frequency parts are combined with those of the high-frequency parts and reconstructed with the inverse wavelet transform to form the future behavior of the runoff. Experiments show that WT-PSR-GA-NN is effective and that its forecasts are highly accurate not only for short-term yearly hydrological time series but also for long-term ones. The comparison results revealed that the overall forecasting performance of WT-PSR-GA-NN is superior to other popular methods in all test cases. We conclude that WT-PSR-GA-NN not only increases forecasting accuracy but is also competitive in efficiency, effectiveness and robustness.
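The sketch below walks through a simplified version of steps (1)–(4) on synthetic data: a wavelet split into low- and high-frequency parts with PyWavelets, delay embedding as phase space reconstruction, and one regressor per band whose forecasts are summed. scikit-learn's MLPRegressor stands in for the GA-trained network, and the embedding dimension and delay are fixed assumptions rather than values estimated from the data.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 20 * np.pi, 400)) + 0.1 * rng.standard_normal(400)

# (1) wavelet split: approximation = low-frequency part, residual = high-frequency part
coeffs = pywt.wavedec(x, "db4", level=2)
low = pywt.waverec([coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]], "db4")[: len(x)]
high = x - low

def embed(series, dim=4, tau=2):
    """(2) phase space reconstruction by delay embedding (dim and tau assumed)."""
    n = len(series) - (dim - 1) * tau - 1
    X = np.array([series[i : i + dim * tau : tau] for i in range(n)])
    y = series[(dim - 1) * tau + 1 : (dim - 1) * tau + 1 + n]
    return X, y

# (3) per-band model; MLPRegressor stands in for the GA-trained neural network
forecast = 0.0
for band in (low, high):
    X, y = embed(band)
    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    model.fit(X[:-1], y[:-1])
    forecast += model.predict(X[-1:])[0]   # (4) recombine band forecasts by summing

print("one-step-ahead forecast:", forecast, "actual:", x[-1])
```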
999.
Variations of face images with pose and polarized illumination increase data uncertainty in face recognition. In fact, synthesized mirror samples can be regarded as representations of the left–right deflection of poses or illuminations of the face. Symmetrical face images generated from the original face images also provide more observations of the face, which is useful for improving the accuracy of face recognition. In this paper, to the best of our knowledge, the well-known minimum squared error classification (MSEC) algorithm is used for the first time to perform face recognition on a face database extended with synthesized mirror training samples, a method termed extended minimum squared error classification (EMSEC). By modifying the MSE classification rule, we append the mirror samples to the training set to gain better classification performance. First, for each subject we merge the original training samples and the mirror samples synthesized from them into a mixed training set. Second, the EMSEC algorithm exploits the mixed training samples to obtain the projection matrix that best transforms the mixed training samples into predefined class labels. Third, the projection matrix is used to simultaneously obtain the transform results of the test sample and of its nearest neighbor in the mixed training set. Finally, we classify the test sample by combining the transform results of the test sample and its nearest neighbor. As an extension of MSEC, EMSEC reduces the uncertainty of the face observations through auxiliary mirror samples, so that it achieves more robust classification performance than traditional MSEC. Experimental results on the ORL, GT, and FERET databases show that EMSEC has better generalization ability than traditional MSEC.
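A toy numpy sketch of this pipeline on random "images" follows: mirror samples are synthesized by horizontal flipping, a ridge-regularized least-squares projection onto one-hot labels is learned from the mixed training set, and a test sample is classified by combining its projection with that of its nearest neighbor. The 0.5/0.5 combination weight and the regularization constant are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, per_class, h, w = 3, 5, 8, 8
images = rng.random((n_classes * per_class, h, w))
labels = np.repeat(np.arange(n_classes), per_class)

# 1) mixed training set: original samples plus their horizontal mirrors
mirrors = images[:, :, ::-1]
X = np.vstack([images.reshape(len(images), -1), mirrors.reshape(len(images), -1)])
y = np.concatenate([labels, labels])
Y = np.eye(n_classes)[y]                      # one-hot class labels

# 2) MSE projection matrix mapping mixed samples to class labels (ridge-regularized)
lam = 1e-3
W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

# 3) classify a test sample by combining its projection with that of its
#    nearest neighbor in the mixed training set (weight 0.5 is an assumption)
test = images[0].reshape(-1) + 0.05 * rng.standard_normal(h * w)
nn = X[np.argmin(np.linalg.norm(X - test, axis=1))]
score = 0.5 * (test @ W) + 0.5 * (nn @ W)
print("predicted class:", int(np.argmax(score)), "true class:", labels[0])
```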
1000.
Attribute proofs in anonymous credential systems are an effective way to balance security and privacy in user authentication; however, the linear complexity of attribute proofs keeps existing anonymous credential systems far from practical, especially on resource-limited smart devices. For efficiency, we present a novel pairing-based anonymous credential system which removes the linear complexity of attribute proofs by building on an aggregate signature scheme. We propose two extended signature schemes, BLS+ and BGLS+, as cryptographic building blocks for constructing anonymous credentials in the random oracle model. Identity-like information of the message holder is encoded in a signature so that the holder can prove possession of the input message along with the validity of the signature. We present an issuance protocol for anonymous credentials embedding weak attributes, i.e., attributes that cannot by themselves identify a user in a population. Users can prove any combination of attributes all at once by aggregating the corresponding individual credentials into one. Attribute proof protocols for AND and OR relations over multiple attributes are also given. The performance analysis shows that the aggregation-based anonymous credential system outperforms both the conventional Camenisch–Lysyanskaya pairing-based system and the accumulator-based system when proving AND and OR relations over multiple attributes, and the credential and public parameters are shorter as well.
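The sketch below illustrates only the aggregation mechanism with plain BLS signatures, assuming the py_ecc library exposes an IETF-style API under py_ecc.bls.G2Basic (KeyGen, SkToPk, Sign, Aggregate, AggregateVerify); it is not the paper's BLS+/BGLS+ construction and provides neither holder binding nor anonymity. The issuer signs each attribute separately, and an AND proof over several attributes aggregates the selected credentials into one constant-size value.

```python
# Assumes py_ecc is installed and exposes the IETF-style BLS API under
# py_ecc.bls.G2Basic; any deviation in that API would require adapting the calls.
from py_ecc.bls import G2Basic as bls

issuer_sk = bls.KeyGen(b"\x01" * 32)          # issuer's signing key (32-byte seed)
issuer_pk = bls.SkToPk(issuer_sk)

# issuer signs each (weak) attribute separately -> one credential per attribute
attributes = [b"age>=18", b"country=NL", b"role=student"]
credentials = [bls.Sign(issuer_sk, attr) for attr in attributes]

# AND proof over a subset of attributes: aggregate the selected credentials
# into a single constant-size signature
disclosed = attributes[:2]
proof = bls.Aggregate(credentials[:2])

# verifier checks all disclosed attributes at once against the issuer's key
ok = bls.AggregateVerify([issuer_pk] * len(disclosed), disclosed, proof)
print("AND proof over", [a.decode() for a in disclosed], "valid:", ok)
```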